Graph learning under sparsity priors
Graph signals offer a very generic and natural representation for data that
lives on networks or irregular structures. However, the actual data structure
is often unknown a priori, though it can sometimes be estimated from knowledge
of the application domain. If this is not possible, the data structure has to
be inferred from the signal observations alone. This is exactly the problem
that we
address in this paper, under the assumption that the graph signals can be
represented as a sparse linear combination of a few atoms of a structured graph
dictionary. The dictionary is constructed from polynomials of the graph
Laplacian, which can sparsely represent a general class of graph signals
composed of localized patterns on the graph. We formulate a graph learning
problem, whose solution provides an ideal fit between the signal observations
and the sparse graph signal model. As the problem is non-convex, we propose to
solve it by alternating between a signal sparse coding and a graph update step.
We provide experimental results that demonstrate the good graph recovery
performance of our method, which generally compares favourably to other recent
network inference algorithms.
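The polynomial-dictionary model described above can be illustrated with a small sketch: atoms are columns of a polynomial of the graph Laplacian, and the sparse coding step selects a few of them greedily. This is a toy illustration, not the paper's actual algorithm; the graph, the coefficients, and the use of matching pursuit are all assumptions made for the example.

```python
import numpy as np

# Toy path graph on 6 vertices (illustrative; not the paper's data).
A = np.zeros((6, 6))
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A            # combinatorial graph Laplacian

# Structured dictionary: atoms are columns of sum_k alpha_k L^k, i.e. a
# localized polynomial kernel "translated" to every vertex.
alpha = [1.0, -0.5, 0.05]                 # hypothetical polynomial coefficients
D = sum(a * np.linalg.matrix_power(L, k) for k, a in enumerate(alpha))
D = D / np.linalg.norm(D, axis=0)         # unit-norm atoms

def matching_pursuit(y, D, n_atoms):
    """Greedy sparse coding (a simple stand-in for the paper's sparse step)."""
    r, x = y.astype(float).copy(), np.zeros(D.shape[1])
    for _ in range(n_atoms):
        j = int(np.argmax(np.abs(D.T @ r)))   # best-correlated atom
        c = D[:, j] @ r
        x[j] += c
        r = r - c * D[:, j]
    return x, r

y = 1.5 * D[:, 2] + 0.8 * D[:, 4]         # a sparse combination of two atoms
x, r = matching_pursuit(y, D, n_atoms=4)
print(np.linalg.norm(r))                   # residual shrinks with each atom
```

In the alternating scheme of the paper, a sparse coding step like this would be interleaved with a graph update step that re-estimates L; only the first half is sketched here.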
Learning parametric dictionaries for graph signals
In sparse signal representation, the choice of a dictionary often involves a
tradeoff between two desirable properties -- the ability to adapt to specific
signal data and a fast implementation of the dictionary. To sparsely represent
signals residing on weighted graphs, an additional design challenge is to
incorporate the intrinsic geometric structure of the irregular data domain into
the atoms of the dictionary. In this work, we propose a parametric dictionary
learning algorithm to design data-adapted, structured dictionaries that
sparsely represent graph signals. In particular, we model graph signals as
combinations of overlapping local patterns. We impose the constraint that the
dictionary is a concatenation of subdictionaries, with each subdictionary being
a polynomial of the graph Laplacian matrix, representing a single pattern
translated to different areas of the graph. The learning algorithm adapts the
patterns to a training set of graph signals. Experimental results on both
synthetic and real datasets demonstrate that the dictionaries learned by the
proposed algorithm are competitive with and often better than unstructured
dictionaries learned by state-of-the-art numerical learning algorithms in terms
of sparse approximation of graph signals. In contrast to the unstructured
dictionaries, however, the dictionaries learned by the proposed algorithm
feature localized atoms and can be implemented in a computationally efficient
manner in signal processing tasks such as compression, denoising, and
classification.
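The subdictionary structure above has a useful consequence worth making concrete: an atom generated by a degree-K polynomial of the Laplacian is supported within the K-hop neighborhood of its center vertex, which is what "localized atoms" means here. The graph and coefficients below are invented for illustration.

```python
import numpy as np

# Toy ring graph on 8 vertices (hypothetical example, not the paper's data).
n = 8
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

def subdictionary(L, coeffs):
    """One subdictionary: a polynomial of L; column j is the learned
    pattern applied to a delta signal at vertex j."""
    return sum(c * np.linalg.matrix_power(L, k) for k, c in enumerate(coeffs))

# Concatenate subdictionaries with (illustrative) coefficient sets.
coeff_sets = [[1.0, -0.3], [0.0, 1.0, -0.2]]       # degrees 1 and 2
D = np.hstack([subdictionary(L, c) for c in coeff_sets])

# Localization: a degree-1 atom centered at vertex 0 touches only
# vertex 0 and its immediate neighbors on the ring.
atom = subdictionary(L, coeff_sets[0])[:, 0]
print(np.nonzero(atom)[0])
```

Because each subdictionary is a matrix polynomial, applying it to a signal reduces to repeated sparse matrix-vector products with L, which is what makes these dictionaries computationally efficient compared to unstructured ones.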
Learning Laplacian Matrix in Smooth Graph Signal Representations
The construction of a meaningful graph plays a crucial role in the success of
many graph-based representations and algorithms for handling structured data,
especially in the emerging field of graph signal processing. However, a
meaningful graph is not always readily available from the data, nor easy to
define depending on the application domain. In particular, it is often
desirable in graph signal processing applications that a graph is chosen such
that the data admit certain regularity or smoothness on the graph. In this
paper, we address the problem of learning graph Laplacians, which is equivalent
to learning graph topologies, such that the input data form graph signals with
smooth variations on the resulting topology. To this end, we adopt a factor
analysis model for the graph signals and impose a Gaussian probabilistic prior
on the latent variables that control these signals. We show that the Gaussian
prior leads to an efficient representation that favors the smoothness property
of the graph signals. We then propose an algorithm for learning graphs that
enforces such a property and is based on minimizing the variations of the
signals on the learned graph. Experiments on both synthetic and real-world data
demonstrate that the proposed graph learning framework can efficiently infer
meaningful graph topologies from signal observations under the smoothness
prior.
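The smoothness notion driving this objective is the Laplacian quadratic form, x^T L x = (1/2) Σ_ij W_ij (x_i - x_j)^2: signals that vary slowly across edges score low. A minimal sketch of the measure (with an invented path graph and signals; the paper optimizes over L, which is not shown here):

```python
import numpy as np

# Path graph: neighboring vertices are connected, so a slowly varying
# signal should score as "smooth" under the Laplacian quadratic form.
n = 10
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

def smoothness(x, L):
    """x^T L x: the total variation of signal x on the graph with
    Laplacian L; the learning objective minimizes this over candidate L."""
    return float(x @ L @ x)

smooth_sig = np.linspace(0.0, 1.0, n)      # slowly varying along the path
rough_sig = np.tile([0.0, 1.0], n // 2)    # flips sign at every edge

print(smoothness(smooth_sig, L), smoothness(rough_sig, L))
```

The learning problem then searches for the Laplacian that makes the observed signals smooth in exactly this sense, subject to constraints keeping L a valid graph Laplacian.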
Tertiary Lymphoid Structures Generation through Graph-based Diffusion
Graph-based representation approaches have proven successful in the analysis
of biomedical data, owing to their ability to capture intricate
dependencies between biological entities, such as the spatial organization of
different cell types in a tumor tissue. However, to further enhance our
understanding of the underlying governing biological mechanisms, it is
important to accurately capture the actual distributions of such complex data.
Graph-based deep generative models are specifically tailored to accomplish
that. In this work, we leverage state-of-the-art graph-based diffusion models
to generate biologically meaningful cell-graphs. In particular, we show that
the adopted graph diffusion model is able to accurately learn the distribution
of cells in terms of their tertiary lymphoid structures (TLS) content, a
well-established biomarker for evaluating cancer progression in oncology
research. Additionally, we further illustrate the utility of the learned
generative models for data augmentation in a TLS classification task. To the
best of our knowledge, this is the first work that leverages the power of graph
diffusion models in generating meaningful biological cell structures.
Multi-Graph Learning of Spectral Graph Dictionaries
We study the problem of learning constitutive features for the effective representation of graph signals, which can be considered as observations collected on different graph topologies. We propose to learn graph atoms and build graph dictionaries that provide sparse representations for classes of signals, which share common spectral characteristics but reside on the vertices of different graphs. In particular, we concentrate on graph atoms that are constructed on polynomials of the graph Laplacian. Such a design makes it possible to abstract from the precise graph topology and to design dictionaries that can be trained and eventually used on different graphs. We cast the dictionary learning problem as an alternating optimization problem where the dictionary and the sparse representations of training signals are updated iteratively. Experimental results on synthetic graph signals representing common processes on graphs show that our dictionaries are able to capture the important components in graph signals. Further experiments on traffic data confirm the benefits of our dictionaries in the sparse approximation of signals capturing traffic bottlenecks.
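The key point of this abstract, abstraction from the precise topology, can be sketched concretely: a spectral kernel g(λ) = Σ_k c_k λ^k is defined on Laplacian eigenvalues, so the same learned kernel yields a dictionary on any graph. The graphs and coefficients below are illustrative assumptions.

```python
import numpy as np

def laplacian(A):
    return np.diag(A.sum(axis=1)) - A

def kernel_dictionary(L, coeffs):
    """Apply a spectral kernel g(lambda) = sum_k c_k lambda^k to L via its
    eigendecomposition; the same kernel transfers across graphs."""
    lam, U = np.linalg.eigh(L)
    g = sum(c * lam**k for k, c in enumerate(coeffs))
    return U @ np.diag(g) @ U.T

# Two different topologies on 5 vertices: a path and a star (illustrative).
A_path = np.diag(np.ones(4), 1); A_path = A_path + A_path.T
A_star = np.zeros((5, 5)); A_star[0, 1:] = 1.0; A_star = A_star + A_star.T

coeffs = [1.0, -0.4, 0.04]   # one kernel, hypothetically learned once
D_path = kernel_dictionary(laplacian(A_path), coeffs)
D_star = kernel_dictionary(laplacian(A_star), coeffs)
print(D_path.shape, D_star.shape)  # same kernel, two graph dictionaries
```

Since g is a polynomial, g(L) equals the matrix polynomial Σ_k c_k L^k, so in practice the spectral form can also be evaluated without any eigendecomposition.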
Mask Combination of Multi-layer Graphs for Global Structure Inference
Structure inference is an important task for network data processing and
analysis in data science. In recent years, quite a few approaches have been
developed to learn the graph structure underlying a set of observations
captured in a data space. Although real-world data is often acquired in
settings where relationships are influenced by a priori known rules, such
domain knowledge is still not well exploited in structure inference problems.
In this paper, we identify the structure of signals defined in a data space
whose inner relationships are encoded by multi-layer graphs. We aim at properly
exploiting the information originating from each layer to infer the global
structure underlying the signals. We thus present a novel method for combining
the multiple graphs into a global graph using mask matrices, which are
estimated through an optimization problem that accommodates the multi-layer
graph information and a signal representation model. The proposed mask
combination method also estimates the contribution of each graph layer in the
structure of signals. The experiments conducted both on synthetic and
real-world data suggest that integrating the multi-layer graph representation
of the data in the structure inference framework enhances the learning
procedure considerably by adapting to the quality and the quantity of the input
data.
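The mask-combination idea can be sketched as an entry-wise (Hadamard) mixture of layer adjacency matrices. In the paper the masks are estimated by an optimization over the observed signals; here they are fixed by hand, and the layers are invented, purely for illustration.

```python
import numpy as np

# Two hypothetical graph layers over the same 4 vertices.
W1 = np.array([[0, 1, 0, 0],
               [1, 0, 1, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 0]], dtype=float)
W2 = np.array([[0, 0, 0, 1],
               [0, 0, 0, 0],
               [0, 0, 0, 1],
               [1, 0, 1, 0]], dtype=float)

# Mask matrices select, entry-wise, how much each layer contributes to
# each edge of the global graph (fixed here; learned in the paper).
M1 = np.ones((4, 4))
M2 = 0.5 * np.ones((4, 4))

W_global = M1 * W1 + M2 * W2    # Hadamard (entry-wise) mask combination
print(W_global)

# Per-layer share of the global structure, as a rough contribution measure.
total = W_global.sum()
print((M1 * W1).sum() / total, (M2 * W2).sum() / total)
```

Reading off the relative weight of each masked layer is what lets the method report how much each layer contributes to the inferred global structure.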
Comparison of time and frequency domain interpolation implementations for MB-OFDM UWB transmitters
This paper investigates the effect of time-domain (TD) and frequency-domain (FD) interpolation on the performance of a Multi-Band (MB) Orthogonal Frequency Division Multiplexing (OFDM) Ultra-Wideband (UWB) system. We introduce a FD interpolator implemented by a radix-8 512-point IFFT architecture for applications in MB-OFDM UWB transmitters. For the specific application where the interpolation factor is fixed to four, the FD interpolator outperforms the TD interpolator implemented with digital low-pass FIR filters in terms of computational complexity. On the other hand, simulation results show that the FD implementation degrades the overall system performance for certain UWB channels.
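The principle behind FD interpolation is zero-padding the spectrum and taking a longer inverse FFT, which for bandlimited signals is equivalent to ideal low-pass interpolation. The sketch below shows a generic factor-4 version of this idea, not the paper's radix-8 512-point IFFT architecture; the signal and sizes are assumptions for the example.

```python
import numpy as np

def fd_interpolate(x, factor):
    """Frequency-domain interpolation: insert zeros between the positive-
    and negative-frequency halves of the spectrum, then inverse-FFT."""
    N = len(x)
    X = np.fft.fft(x)
    half = N // 2
    Xp = np.zeros(N * factor, dtype=complex)
    Xp[:half] = X[:half]          # positive frequencies
    Xp[-half:] = X[-half:]        # negative frequencies
    return np.fft.ifft(Xp) * factor   # rescale for the longer IFFT

# Bandlimited test tone: FD interpolation reproduces the denser sampling.
N, factor = 128, 4
n = np.arange(N)
x = np.cos(2 * np.pi * 3 * n / N)
y = fd_interpolate(x, factor)
dense = np.cos(2 * np.pi * 3 * np.arange(N * factor) / (N * factor))
print(np.max(np.abs(y.real - dense)))   # ~0 for a bandlimited input
```

With the interpolation factor fixed at four, the zero-padded IFFT replaces the multiply-accumulate chain of a TD FIR interpolator, which is the complexity advantage the abstract refers to.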
Graph signal processing for machine learning: A review and new perspectives
The effective representation, processing, analysis, and visualization of
large-scale structured data, especially those related to complex domains such
as networks and graphs, are among the key questions in modern machine
learning. Graph signal processing (GSP), a vibrant branch of signal processing
models and algorithms that aims at handling data supported on graphs, opens new
paths of research to address this challenge. In this article, we review a few
important contributions made by GSP concepts and tools, such as graph filters
and transforms, to the development of novel machine learning algorithms. In
particular, our discussion focuses on the following three aspects: exploiting
data structure and relational priors, improving data and computational
efficiency, and enhancing model interpretability. Furthermore, we provide new
perspectives on future development of GSP techniques that may serve as a bridge
between applied mathematics and signal processing on one side, and machine
learning and network science on the other. Cross-fertilization across these
different disciplines may help unlock the numerous challenges of complex data
analysis in the modern age.